Accelerating 2PC-based ML with Limited Trusted Hardware
This paper describes the design, implementation, and evaluation of Otak, a
system that allows two non-colluding cloud providers to run machine learning
(ML) inference without knowing the inputs to inference. Prior work for this
problem mostly relies on advanced cryptography such as two-party secure
computation (2PC) protocols that provide rigorous guarantees but suffer from
high resource overhead. Otak improves efficiency via a new 2PC protocol that
(i) tailors recent primitives such as function and homomorphic secret sharing
to ML inference, and (ii) uses trusted hardware in a limited capacity to
bootstrap the protocol. At the same time, Otak reduces trust assumptions on
trusted hardware by running only a small amount of code inside the hardware, restricting its
use to a preprocessing step, and distributing trust over heterogeneous trusted
hardware platforms from different vendors. An implementation and evaluation of
Otak demonstrates that its CPU and network overhead, converted to a dollar
amount, is 5.4–385× lower than that of state-of-the-art 2PC-based works.
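The protocols the abstract refers to build on secret sharing between the two non-colluding providers. As a minimal illustration (not Otak's actual protocol, which additionally uses function and homomorphic secret sharing), the sketch below shows two-party additive secret sharing and why each server can locally evaluate its share of a public linear layer; all names and the modulus are illustrative choices.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus, not from the paper

def share(x):
    """Split x into two additive shares modulo P; neither share reveals x."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine the two shares."""
    return (s0 + s1) % P

def linear_eval(x_share, w, b_share):
    """Each server locally computes its share of w*x + b.

    Additive sharing is linear, so multiplying each share by a public
    weight w and adding a share of the bias yields a valid share of
    the layer's output, with no communication between the servers.
    """
    return (w * x_share + b_share) % P

# Client secret-shares its input x and the model's bias b between servers.
x0, x1 = share(42)
b0, b1 = share(5)
w = 3  # public weight

y0 = linear_eval(x0, w, b0)  # computed by server 0
y1 = linear_eval(x1, w, b1)  # computed by server 1
assert reconstruct(y0, y1) == 3 * 42 + 5  # → 131
```

Nonlinear layers (e.g. ReLU) are where such schemes become expensive; that is the part Otak targets with its tailored primitives and limited trusted-hardware preprocessing.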
Moreover, Otak's trusted computing base (the code inside trusted hardware) is
only 1,300 lines of code, which is 14.6–29.2× smaller than the code size in
prior trusted hardware-based works.

Comments: 19 pages